Analogue chips can slash the energy used to run AI models
An analogue computer chip can run an artificial intelligence (AI) speech recognition model 14 times more efficiently than traditional chips, potentially offering a solution to the vast and growing energy use of AI research and to the worldwide shortage of the digital chips usually used. The device was developed by IBM Research, which declined New Scientist's request for an interview and did not comment. But in a paper outlining the work, the researchers claim that the analogue chip can reduce bottlenecks in AI development. There is a global rush for GPUs, the graphics processors originally designed to run video games that have traditionally been used to train and run AI models, with demand outstripping supply. Studies have also shown that the energy use of AI is growing rapidly, rising 100-fold from 2012 to 2021, with most of that energy derived from fossil fuels. These issues have led to suggestions that the ever-increasing scale of AI models will soon reach an impasse.
Litmus Helps CHIMEI Power Artificial Intelligence at the Edge
Litmus, the Edge Data Platform for Industry 4.0, today announced that CHIMEI, a leading Taiwan-based performance materials manufacturer, has deployed Litmus Edge to power artificial intelligence at the edge. CHIMEI will roll out Litmus Edge across many of its plants to expand factory data collection and run AI models at the edge, improving quality in its production processes. Previously, CHIMEI collected data and ran AI models for each production process separately on different edge devices, which was inefficient and costly to maintain. Complex integration with multiple AI model servers prompted the company to look for a new solution that could collect data from more machines and provide AI model runtime functionality. CHIMEI chose Litmus Edge to consolidate production processes and run most of its AI models from a single edge device.
Google unveils new tools to bolster AI hardware development
Google continues to expand its range of AI products and services with a trio of new hardware devices aimed at the developer community. The devices don't seem to have been officially announced yet and were first spotted by Hackster. They're being introduced under a new Google Coral brand (which is itself still "in beta") and include a development board that sells for $149.99, a USB accelerator that goes for $74.99, and a 5-megapixel camera that's available for $24.99. Both the dev board and the accelerator are powered by Google's Edge TPU chips, ASIC processors no bigger than a fingernail that are designed to run AI models without breaking a sweat. The camera, meanwhile, is an add-on for the dev board.
How 3 developers used Core ML to run AI models on an iPhone
Apple's first iPhone launched in 2007, decades after the concept of machine learning -- a subset of artificial intelligence (AI) that employs mathematical techniques to "teach" software to make sense of complicated datasets -- rose to prominence. But it was only recently that the two collided. Apple launched Core ML, a framework designed to speed up machine learning tasks, alongside iOS 11 in 2017. The Cupertino company shipped its first chip purpose-built for AI, the A11 Bionic, in last year's iPhone X. And at the 2018 Worldwide Developers Conference (WWDC), it took the wraps off Core ML 2, a new and improved version of Core ML, and Create ML, a GPU-accelerated tool for native AI model training on Macs.
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)